Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting
Abstract
Deep Neural Networks are vulnerable to adversarial attacks. Among many defense strategies, adversarial training with untargeted attacks is one of the most effective methods. Theoretically, adversarial perturbation in untargeted attacks can be added along arbitrary directions, and the predicted labels of untargeted attacks should be unpredictable. However, we find that naturally imbalanced inter-class semantic similarity makes those hard-class pairs become virtual targets of each other. This study investigates the impact of such closely-coupled classes on adversarial attacks and develops a self-paced reweighting strategy in adversarial training accordingly. Specifically, we propose to upweight hard-class pair losses in model optimization, which prompts learning discriminative features from hard classes. We further incorporate a term to quantify hard-class pair consistency in adversarial training, which greatly boosts model robustness. Extensive experiments show that the proposed adversarial training method achieves superior robustness performance over state-of-the-art defenses against a wide range of adversarial attacks. The code of SPAT is published at https://github.com/puerrrr/Self-Paced-Adversarial-Training.
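As a rough illustration of the core idea, the sketch below (PyTorch) upweights the per-sample loss of examples whose ground-truth class is frequently confused with another class on adversarial inputs. The confusion-based weighting rule and the `gamma` hyper-parameter are illustrative assumptions, not the exact SPAT formulation; see the repository above for the authors' implementation.

```python
# Illustrative sketch of self-paced hard-class pair reweighting.
# The weighting rule and `gamma` are assumptions, NOT the exact SPAT formulas.
import torch
import torch.nn.functional as F

def confusion_matrix(logits, labels, num_classes):
    """Row-normalized confusion matrix estimated from one batch."""
    preds = logits.argmax(dim=1)
    cm = torch.zeros(num_classes, num_classes, device=logits.device)
    for t, p in zip(labels, preds):
        cm[t, p] += 1.0
    return cm / cm.sum(dim=1, keepdim=True).clamp(min=1.0)

def hard_pair_weights(cm, labels, gamma=2.0):
    """Upweight samples whose true class is often confused with another
    class (its 'virtual target'). Hardness is the largest off-diagonal
    confusion rate of the sample's class."""
    off_diag = cm - torch.diag(torch.diag(cm))
    hardness = off_diag.max(dim=1).values        # per-class hardness in [0, 1]
    w = 1.0 + gamma * hardness[labels]           # self-paced: follows the current model
    return w / w.mean()                          # normalize to keep the loss scale stable

def reweighted_adv_loss(model, x_adv, labels, num_classes):
    logits = model(x_adv)
    with torch.no_grad():
        cm = confusion_matrix(logits, labels, num_classes)
        w = hard_pair_weights(cm, labels)
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (w * per_sample).mean()
```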
Similar resources
Parseval Networks: Improving Robustness to Adversarial Examples
We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most impor...
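As the abstract describes, the key mechanism is a per-layer Lipschitz constraint. Below is a minimal sketch of the retraction step used for this purpose, applied to linear layers after each optimizer update (the paper first reshapes convolution kernels into matrices); PyTorch and the small step size `beta` are assumptions here.

```python
import torch

@torch.no_grad()
def parseval_retraction(model, beta=1e-3):
    # After each optimizer step, pull weight matrices back toward
    # row-orthonormality: W <- (1 + beta) * W - beta * W W^T W,
    # which keeps each layer's Lipschitz constant near 1.
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            W = module.weight  # shape (out_features, in_features)
            W.copy_((1 + beta) * W - beta * (W @ W.t() @ W))
```

Calling `parseval_retraction(model)` after every `optimizer.step()` bounds how much each layer can amplify an adversarial perturbation.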
Improving Network Robustness against Adversarial Attacks with Compact Convolution
Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to mis-classify the sample. In this paper, we focus on neutralizing adversarial attacks by compact feature learning. In ...
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
Deep neural networks have lately shown tremendous performance in various applications including vision and speech processing tasks. However, alongside their ability to perform these tasks with such high accuracy, it has been shown that they are highly susceptible to adversarial attacks: a small change of the input would cause the network to err with high confidence. This phenomenon exposes an i...
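A minimal sketch of the underlying idea, penalizing the Frobenius norm of the input-output Jacobian alongside the task loss: PyTorch, the exact computation shown here, and the `lam` weight are illustrative assumptions; practical implementations often approximate the Jacobian with random projections instead.

```python
# Sketch of a Jacobian penalty: add the squared Frobenius norm of
# d(logits)/d(input) to the task loss. Exact per-logit gradients are
# computed for clarity, which is slow for many classes.
import torch
import torch.nn.functional as F

def jacobian_regularized_loss(model, x, y, lam=0.01):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    jac_penalty = 0.0
    for k in range(logits.shape[1]):
        # Gradient of logit k w.r.t. the input, per sample.
        g = torch.autograd.grad(logits[:, k].sum(), x,
                                create_graph=True, retain_graph=True)[0]
        jac_penalty = jac_penalty + g.pow(2).sum() / x.shape[0]
    return task_loss + lam * jac_penalty
```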
Deep Adversarial Robustness
Deep learning has recently contributed to learning state-of-the-art representations in service of various image recognition tasks. Deep learning uses cascades of many layers of nonlinear processing units for feature extraction and transformation. Recently, researchers have shown that deep learning architectures are particularly vulnerable to adversarial examples, inputs to machine learning mode...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i12.26738